Design and Implementation of a Tool for Extracting Uzbek Syllables

Salaev, Ulugbek, Kuriyozov, Elmurod, Matlatipov, Gayrat

arXiv.org Artificial Intelligence

The accurate syllabification of words plays a vital role in many Natural Language Processing applications. Syllabification is a versatile linguistic tool with applications in linguistic research, language technology, education, and other fields where understanding and processing language is essential. In this paper, we present a comprehensive approach to syllabification for the Uzbek language, combining rule-based techniques and machine learning algorithms. Our rule-based approach utilizes advanced methods for dividing words into syllables, generating hyphenations for line breaks, and counting syllables. Additionally, we collected a dataset comprising word-syllable mappings, hyphenations, and syllable counts, used both for training machine learning algorithms to predict syllable counts and for evaluating the proposed model. Our results demonstrate the effectiveness and efficiency of both approaches: each achieved a high level of accuracy, exceeding 99%. This study provides valuable insights and recommendations for future research on syllabification and related areas, not only for the Uzbek language itself but also for other closely related, low-resource Turkic languages.
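The rule-based side of such a system can be illustrated with a minimal sketch. This is my own simplification, not the authors' tool: it assumes Latin-script input, a reduced vowel set (real Uzbek orthography also has oʻ and digraphs), and a common Turkic heuristic that a single consonant between two vowels opens the next syllable, while the first of two consonants closes the preceding one.

```python
# Illustrative rule-based syllabifier for Uzbek-like Latin-script words.
# Simplified heuristic, NOT the paper's actual implementation.

VOWELS = set("aeiou")  # reduced set; real Uzbek also has o' and digraphs

def syllabify(word: str) -> list[str]:
    word = word.lower()
    # Indices of vowels, which serve as syllable nuclei.
    nuclei = [i for i, ch in enumerate(word) if ch in VOWELS]
    if not nuclei:
        return [word]  # no vowel: treat the whole string as one unit
    syllables = []
    start = 0
    for a, b in zip(nuclei, nuclei[1:]):
        gap = b - a - 1  # consonants between two consecutive nuclei
        if gap > 0:
            # Keep exactly one consonant as the onset of the next syllable;
            # any earlier consonants close the current syllable.
            cut = a + 1 + (gap - 1)
        else:
            # Adjacent vowels form separate syllables (hiatus).
            cut = a + 1
        syllables.append(word[start:cut])
        start = cut
    syllables.append(word[start:])
    return syllables
```

With this sketch, `syllabify("maktab")` yields `["mak", "tab"]`, and `len(syllabify(word))` gives the syllable count that the abstract describes predicting with machine learning.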


Rockafellian Relaxation and Stochastic Optimization under Perturbations

Royset, Johannes O., Chen, Louis L., Eckstrand, Eric

arXiv.org Artificial Intelligence

In practice, optimization models are often prone to unavoidable inaccuracies due to dubious assumptions and corrupted data. Traditionally, this placed special emphasis on risk-based and robust formulations, and their focus on "conservative" decisions. We develop, in contrast, an "optimistic" framework based on Rockafellian relaxations in which optimization is conducted not only over the original decision space but also jointly with a choice of model perturbation. The framework enables us to address challenging problems with ambiguous probability distributions from the areas of two-stage stochastic optimization without relatively complete recourse, probability functions lacking continuity properties, expectation constraints, and outlier analysis. We are also able to circumvent the fundamental difficulty in stochastic optimization that convergence of distributions fails to guarantee convergence of expectations. The framework centers on the novel concepts of exact and limit-exact Rockafellians, with interpretations of "negative" regularization emerging in certain settings. We illustrate the role of Phi-divergence, examine rates of convergence under changing distributions, and explore extensions to first-order optimality conditions. The main development is free of assumptions about convexity, smoothness, and even continuity of objective functions. Numerical results in the setting of computer vision and text analytics with label noise illustrate the framework.
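The central construction can be sketched as follows, using the standard definition of a Rockafellian from the variational-analysis literature; the notation is mine, not taken from this paper.

```latex
% A Rockafellian for a problem  min_x f(x)  is a function F(x,u) that
% embeds it into a family of perturbed problems, anchored at u = 0:
%   F(x, 0) = f(x).
% The "optimistic" relaxation optimizes jointly over the decision x and
% the perturbation u, with a penalty theta on the perturbation:
\min_{x \in \mathbb{R}^n,\; u \in \mathbb{R}^m} \; F(x, u) + \theta(u)
% Loosely, the Rockafellian is "exact" for a given theta when solving
% this relaxed joint problem recovers the minimizers of the original
% problem (with u = 0 optimal), rather than merely bounding it.
```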


We need to avoid a 'ready, fire, aim!' approach to AI regulation

FOX News

Sam Altman, the CEO of artificial intelligence lab OpenAI, told a Senate panel he welcomes federal regulation on the technology "to mitigate" its risks. The panic to regulate artificial intelligence (AI) came almost immediately after last fall's release of ChatGPT popularized the technology with the public. Some industry insiders themselves called for a pause on development, highlighting that expertise in a field doesn't translate into proficiency in the perils of regulation. That appeal was followed by a White House AI Bill of Rights and an educational effort by Senate Majority Leader Chuck Schumer, D-N.Y. Fears about AI include job displacement, data security and privacy, misinformation, autonomous defense systems mistakes, discrimination and bias, and an existential threat to humanity itself. It's imperative to prove actual market failure before regulating and to make sure the costs of doing so don't outweigh the benefits.


Consistent Approximations in Composite Optimization

Royset, Johannes O.

arXiv.org Machine Learning

Approximations of optimization problems arise in computational procedures and sensitivity analysis. The resulting effect on solutions can be significant, with even small approximations of components of a problem translating into large errors in the solutions. We specify conditions under which approximations are well behaved in the sense of minimizers, stationary points, and level-sets, and this leads to a framework of consistent approximations. The framework is developed for a broad class of composite problems, which are neither convex nor smooth. We demonstrate the framework using examples from stochastic optimization, neural-network-based machine learning, distributionally robust optimization, penalty and augmented Lagrangian methods, interior-point methods, homotopy methods, smoothing methods, extended nonlinear programming, difference-of-convex programming, and multi-objective optimization. An enhanced proximal method illustrates the algorithmic possibilities.
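As background for why "well-behaved" approximation is delicate, the standard notion from variational analysis is epi-convergence; the following is a textbook fact, not a result specific to this paper, and the notation is mine.

```latex
% Epi-convergence  f^\nu \to f  requires, at every point x:
%   \liminf_\nu f^\nu(x^\nu) \ge f(x)  for every sequence x^\nu \to x, and
%   \limsup_\nu f^\nu(x^\nu) \le f(x)  for some sequence x^\nu \to x.
% Its key consequence: limits of near-minimizers of the approximating
% problems are minimizers of the limiting problem,
x^\nu \in \varepsilon^\nu\text{-}\operatorname{argmin} f^\nu,\quad
\varepsilon^\nu \to 0,\quad x^\nu \to x
\;\Longrightarrow\;
x \in \operatorname{argmin} f.
% Pointwise convergence of f^\nu to f does NOT guarantee this,
% which is why a dedicated framework of consistent approximations
% is needed for minimizers, stationary points, and level-sets alike.
```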


Different Ways To Master Quantum Machine Learning

#artificialintelligence

I did not have the fortune to take a quantum computing class in college, let alone a class in quantum machine learning. At the time, it wouldn't have been much fun anyway. In the early 2000s, quantum computing was just taking the step from pure theory to being evaluated in research labs. It was a field for theoretical physicists and mathematicians. At the time, I hadn't even heard of it.